Attacking Binarized Neural Networks
Abstract
Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and, in the worst case, performance on par with full-precision models. We focus on the very low-precision case where weights and activations are both quantized to ±1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks. We observe that non-scaled binary neural networks exhibit an effect similar to the original defensive distillation procedure, which led to gradient masking and a false sense of security. We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients.
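As a rough illustration of the stochastic weight quantization mentioned above, here is a minimal NumPy sketch (our own, not the authors' code) of the rule common in the BNN literature: a real-valued weight w in [-1, 1] is rounded to +1 with probability (w + 1)/2, so every forward pass through the stochastic layer samples different binary weights and an iterative attacker receives noisy gradients.

```python
import numpy as np

def stochastic_binarize(w, rng):
    """Stochastically quantize weights to +/-1: a weight w in [-1, 1]
    becomes +1 with probability (w + 1) / 2 (the "hard sigmoid" rule).
    Each call draws a fresh sample, so an iterative attacker never
    sees the same effective weights twice."""
    w = np.clip(w, -1.0, 1.0)
    p_plus = (w + 1.0) / 2.0                      # P(w_b = +1)
    return np.where(rng.random(w.shape) < p_plus, 1.0, -1.0)

rng = np.random.default_rng(0)
w = np.array([-0.9, -0.1, 0.0, 0.4, 0.95])
print(stochastic_binarize(w, rng))                # e.g. [-1. -1.  1.  1.  1.]
print(stochastic_binarize(w, rng))                # a different sample next call
```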
Similar Resources
Binarized Neural Networks on the ImageNet Classification Task
We trained Binarized Neural Networks (BNNs) on the high-resolution ImageNet ILSVRC-2012 dataset classification task and achieved good performance. With a moderately sized network of 13 layers, we obtained a top-5 classification accuracy of 84.1% on the validation set through network distillation, much better than previously published results of 73.2% on XNOR network and 69.1% on binarized Go...
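The snippet above credits network distillation for the accuracy gain. As a hedged sketch of the standard Hinton-style formulation (the paper's exact recipe may differ), the binary student is trained to match the softened softmax outputs of a full-precision teacher:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T - np.max(z / T, axis=-1, keepdims=True)  # stable softened softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy of the student's softened predictions against the
    teacher's softened outputs; a temperature T > 1 exposes the teacher's
    information about relative class similarities."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(-np.sum(p_t * np.log(p_s + 1e-12), axis=-1).mean())

teacher = np.array([[8.0, 2.0, 0.5]])   # full-precision teacher logits
student = np.array([[5.0, 1.0, 1.5]])   # binarized student logits
print(distillation_loss(student, teacher))
```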
Verifying Properties of Binarized Deep Neural Networks
Understanding properties of deep neural networks is an important challenge in deep learning. In this paper, we take a step in this direction by proposing a rigorous way of verifying properties of a popular class of neural networks, Binarized Neural Networks, using the well-developed means of Boolean satisfiability. Our main contribution is a construction that creates a representation of a binar...
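To see why Boolean satisfiability is a natural fit, note that a ±1 neuron with ±1 inputs computes sign(Σᵢ wᵢxᵢ), and each product wᵢxᵢ is +1 exactly when the sign bits of wᵢ and xᵢ agree (an XNOR). The neuron is therefore a Boolean threshold (cardinality) constraint, which SAT solvers handle natively. A brute-force check of that equivalence, with illustrative names of our own choosing:

```python
from itertools import product

def neuron_pm1(w, x):
    """Binarized neuron over +/-1 values: sign of the dot product."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1

def neuron_boolean(w, x):
    """The same neuron as a Boolean threshold test: w_i * x_i = +1
    exactly when w_i and x_i agree (XNOR), and the dot product equals
    2 * agreements - n, so the neuron fires iff at least n/2 XNORs hold."""
    agree = sum(1 for wi, xi in zip(w, x) if wi == xi)
    return 1 if 2 * agree >= len(w) else -1

w = (1, -1, -1, 1, 1)
assert all(neuron_pm1(w, x) == neuron_boolean(w, x)
           for x in product((-1, 1), repeat=len(w)))
print("Boolean threshold encoding matches on all inputs")
```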
Binarized Neural Networks
In this work we introduce a binarized deep neural network (BDNN) model. BDNNs are trained using a novel binarized back propagation algorithm (BBP), which uses binary weights and binary neurons during the forward and backward propagation, while retaining the precision of the stored weights in which gradients are accumulated. At test time, BDNNs are fully binarized and can be implemented in hardware...
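A minimal sketch of the training idea described here, under the usual straight-through-estimator convention (our simplification, not necessarily the paper's exact BBP algorithm): the forward and backward passes use the binarized weights, while the gradient update accumulates in a full-precision copy clipped to [-1, 1].

```python
import numpy as np

def bbp_style_step(w_real, x, grad_out, lr=0.01):
    """One step in the spirit of binarized back propagation: the
    forward pass uses sign(w_real); the resulting gradient is applied
    to the full-precision weights (straight-through estimator), which
    are kept in [-1, 1] so binarization stays meaningful."""
    w_bin = np.where(w_real >= 0, 1.0, -1.0)   # binary weights for this pass
    y = x @ w_bin                              # forward with binary weights
    grad_w = np.outer(x, grad_out)             # dL/dw_bin, reused for w_real
    w_real = np.clip(w_real - lr * grad_w, -1.0, 1.0)
    return w_real, y

rng = np.random.default_rng(1)
w = rng.uniform(-1.0, 1.0, size=(4, 2))
w, y = bbp_style_step(w, x=rng.standard_normal(4), grad_out=np.ones(2))
print(y)
```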
Verification of Binarized Neural Networks
We study the problem of formal verification of Binarized Neural Networks (BNN), which have recently been proposed as an energy-efficient alternative to traditional learning networks. The verification of BNNs, via a reduction to hardware verification, can be made even more scalable by factoring computations among neurons within the same layer. By proving the NP-hardness of finding optimal factoring...
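The reduction to hardware verification works because a binarized layer is already a bit-level circuit: with +1 encoded as a set bit, the ±1 dot product collapses to an XNOR followed by a popcount. A small sketch of that identity (our illustration, not the paper's factoring construction):

```python
def dot_xnor_popcount(w_bits, x_bits, n):
    """+/-1 dot product on bit-packed operands: with +1 encoded as a
    set bit, agreements are XNOR matches, and the dot product equals
    2 * popcount(XNOR) - n. This bit-level form is what lets a BNN
    layer be expressed as a hardware circuit for verification."""
    mask = (1 << n) - 1
    xnor = ~(w_bits ^ x_bits) & mask
    return 2 * bin(xnor).count("1") - n

# +1 -> bit 1, -1 -> bit 0, for the vectors (1,-1,1,1) and (1,1,-1,1)
print(dot_xnor_popcount(0b1011, 0b1101, 4))   # (1)+(-1)+(-1)+(1) = 0
```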
Discretization of analog communication signals by noise addition in reinforcement learning of communication
Towards the unified processing of symbols and patterns by neural networks, it was examined whether symbols emerge in neural networks trained only by reinforcement learning. A very simple communication-learning task was assumed, and noise was added to the communication signals. After learning, as the noise level during learning became larger, the communication signals were binarized ...
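A hedged intuition for why channel noise pushes learned signals to the extremes (our toy model, not the paper's reinforcement-learning setup): with additive Gaussian noise and sign decoding, the decoding-error probability falls as the two symbols move apart, so reward maximization under noise favors saturated, effectively binary signals.

```python
import numpy as np

def decode_error_rate(level, sigma=0.5, trials=100_000, seed=0):
    """Two-symbol analog channel: transmit +/-level, add Gaussian noise,
    decode by sign. The error rate falls as the symbols separate, which
    is the pressure that drives learned signals toward binary extremes."""
    rng = np.random.default_rng(seed)
    sent = rng.choice([-level, level], size=trials)
    received = sent + rng.normal(0.0, sigma, size=trials)
    return float(np.mean(np.sign(received) != np.sign(sent)))

for level in (0.2, 0.5, 1.0):
    print(level, decode_error_rate(level))   # roughly 0.34, 0.16, 0.02
```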
Journal: CoRR
Volume: abs/1711.00449
Pages: -
Publication date: 2017